
Benchmark Data Repositories for Better Benchmarking

Neural Information Processing Systems

In machine learning research, it is common to evaluate algorithms via their performance on standard benchmark datasets. While a growing body of work establishes guidelines for---and levies criticisms at---data and benchmarking practices in machine learning, comparatively less attention has been paid to the data repositories where these datasets are stored, documented, and shared. In this paper, we analyze the landscape of these repositories and the role they can play in improving benchmarking. This role includes addressing issues with both datasets themselves (e.g., representational harms, construct validity) and the manner in which evaluation is carried out using such datasets (e.g., overemphasis on a few datasets and metrics, lack of reproducibility). To this end, we identify and discuss a set of considerations surrounding the design and use of benchmark data repositories, with a focus on improving benchmarking practices in machine learning.






Curate, Connect, Inquire: A System for Findable Accessible Interoperable and Reusable (FAIR) Human-Robot Centered Datasets

Zhou, Xingru, Modak, Sadanand, Chan, Yao-Cheng, Deng, Zhiyun, Sentis, Luis, Esteva, Maria

arXiv.org Artificial Intelligence

The rapid growth of AI in robotics has amplified the need for high-quality, reusable datasets, particularly in human-robot interaction (HRI) and AI-embedded robotics. While more robotics datasets are being created, the landscape of open data in the field is uneven. This is due to a lack of curation standards and consistent publication practices, which makes it difficult to discover, access, and reuse robotics data. To address these challenges, this paper presents a curation and access system with two main contributions: (1) a structured methodology to curate, publish, and integrate FAIR (Findable, Accessible, Interoperable, Reusable) human-centered robotics datasets; and (2) a ChatGPT-powered conversational interface trained with the curated datasets' metadata and documentation to enable exploration and comparison of robotics datasets and data retrieval using natural language. Developed based on practical experience curating datasets from robotics labs within Texas Robotics at the University of Texas at Austin, the system demonstrates the value of standardized curation and persistent publication of robotics data. The system's evaluation suggests that access to and understandability of human-robotics data are significantly improved. This work directly aligns with the goals of the HCRL @ ICRA 2025 workshop and represents a step towards more human-centered access to data for embodied AI.

I. INTRODUCTION The rise of AI-embedded robotics has made the need for high-quality datasets for varied training applications critical. In response, researchers are increasingly creating datasets specifically for use in AI applications. Derived from complex and often interdisciplinary studies using mixed research methods, these often large and multimodal datasets reflect both the robots' and the humans' perspectives; some are gathered in the context of carefully designed experiments and others during observations in the physical world.


Unreflected Use of Tabular Data Repositories Can Undermine Research Quality

Tschalzev, Andrej, Purucker, Lennart, Lüdtke, Stefan, Hutter, Frank, Bartelt, Christian, Stuckenschmidt, Heiner

arXiv.org Artificial Intelligence

Data repositories have accumulated a large number of tabular datasets from various domains. Machine Learning researchers are actively using these datasets to evaluate novel approaches. Consequently, data repositories have an important standing in tabular data research. They not only host datasets but also provide information on how to use them in supervised learning tasks. In this paper, we argue that, despite great achievements in usability, the unreflected usage of datasets from data repositories may have led to reduced research quality and scientific rigor. We present examples from prominent recent studies that illustrate the problematic use of datasets from OpenML, a large data repository for tabular data. Our illustrations help users of data repositories avoid falling into the traps of (1) using suboptimal model selection strategies, (2) overlooking strong baselines, and (3) inappropriate preprocessing. In response, we discuss possible solutions for how data repositories can prevent the inappropriate use of datasets and become the cornerstones for improved overall quality of empirical research studies. In tabular data research, the OpenML repository is used extensively (Gijsbers et al., 2019; Salinas & Erickson, 2024; Liu et al., 2024; Hollmann et al., 2025). A driving factor for tabular data repository usage is the recent increase in efforts to transfer the success of deep learning to the tabular domain. The development of novel neural network models (Arik & Pfister, 2021; Chang et al., 2021; Gorishniy et al., 2021; 2023; 2024), and more recently tabular foundation models (Gardner et al., 2024; Hollmann et al., 2025), dominates the tabular machine learning community. In response, recent comparative studies try to gather as many datasets as possible to facilitate a rigorous and comprehensive evaluation of novel approaches (Grinsztajn et al., 2022; McElfresh et al., 2023; Ye et al., 2024a). While McElfresh et al. (2023) used 196 datasets, a recent study scales up to 300 datasets from OpenML (Ye et al., 2024a). Similarly, studies evaluating foundation models seem to include as many datasets from these benchmarks as possible, apparently taking their quality and appropriateness for granted (Yan et al., 2024; Gardner et al., 2024). Different authors have recently criticized the intense focus on model development and the limited attention to data quality. Existing benchmarks often use outdated data (Kohli et al., 2024), ignore task-specific preprocessing (Tschalzev et al., 2024), or use inappropriate data splits (Rubachev et al., 2024).